
    Decidability Results for Multi-objective Stochastic Games

    We study stochastic two-player turn-based games in which the objective of one player is to ensure several infinite-horizon total reward objectives, while the other player attempts to spoil at least one of the objectives. The games have previously been shown not to be determined, and an approximation algorithm for computing a Pareto curve has been given. The major drawback of the existing algorithm is that it needs to compute Pareto curves for finite-horizon objectives (for increasing length of the horizon), and the size of these Pareto curves can grow unboundedly, even when the infinite-horizon Pareto curve is small. By adapting existing results, we first give an algorithm that computes the Pareto curve for determined games. Then, as the main result of the paper, we show that for the natural class of stopping games and when there are two reward objectives, the problem of deciding whether a player can ensure satisfaction of the objectives with given thresholds is decidable. The result relies on an intricate and novel proof which shows that the Pareto curves contain only finitely many points. As a consequence, we get that the two-objective discounted-reward problem for the unrestricted class of stochastic games is decidable. Comment: 35 pages.
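
    To make the decision problem concrete, the threshold question described above can be phrased roughly as follows; this is an illustrative formalisation using our own notation (σ, τ, rew_i, v_i), not the paper's.

    ```latex
    % Sketch of the achievability-with-thresholds question for two total
    % reward objectives (notation is illustrative, not the paper's):
    % does Player 1 have a strategy meeting both expected-reward thresholds
    % against every counter-strategy of Player 2?
    \exists \sigma\, \forall \tau:\quad
      \mathbb{E}^{\sigma,\tau}\!\left[\mathrm{rew}_1\right] \ge v_1
      \;\wedge\;
      \mathbb{E}^{\sigma,\tau}\!\left[\mathrm{rew}_2\right] \ge v_2
    ```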

    Robust Multidimensional Mean-Payoff Games are Undecidable

    Mean-payoff games play a central role in quantitative synthesis and verification. In a single-dimensional game, a weight is assigned to every transition and the objective of the protagonist is to ensure a non-negative limit-average weight. In the multidimensional setting, a weight vector is assigned to every transition and the objective of the protagonist is to satisfy a Boolean condition over the limit-average weight of each dimension, e.g., LimAvg(x_1) ≀ 0 √ LimAvg(x_2) ≄ 0 ∧ LimAvg(x_3) ≄ 0. We recently proved that when one of the players is restricted to finite-memory strategies, the decidability of determining the winner is inter-reducible with Hilbert's Tenth Problem over the rationals (a fundamental long-standing open problem). In this work we allow arbitrary (infinite-memory) strategies for both players and show that the problem is undecidable.
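
    For reference, LimAvg(x_i) denotes the limit-average (mean-payoff) value of dimension i along a play. A standard definition, with w_i(e_j) the i-th weight of the j-th transition of the play, is the following (conventions differ on lim inf versus lim sup):

    ```latex
    % Limit-average weight of dimension i along a play e_1 e_2 e_3 ...
    % (some papers use \limsup instead of \liminf):
    \mathrm{LimAvg}(x_i) \;=\; \liminf_{n \to \infty} \frac{1}{n} \sum_{j=1}^{n} w_i(e_j)
    ```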

    Abstract Interpretation of Stateful Networks

    Modern networks achieve robustness and scalability by maintaining states on their nodes. These nodes are referred to as middleboxes and are essential for network functionality. However, the presence of middleboxes drastically complicates the task of network verification. Previous work showed that the problem is undecidable in general and EXPSPACE-complete when abstracting away the order of packet arrival. We describe a new algorithm for conservatively checking isolation properties of stateful networks. The asymptotic complexity of the algorithm is polynomial in the size of the network, albeit exponential in the maximal number of queries to the local state that a middlebox can make, which is often small. Our algorithm is sound, i.e., it can never miss a violation of safety, but it may fail to verify some properties. The algorithm performs on-the-fly abstract interpretation by (1) abstracting away the order of packet processing and the number of times each packet arrives, (2) abstracting away correlations between states of different middleboxes and channel contents, and (3) representing middlebox states by their effect on each packet separately, rather than taking into account the entire state space. We show that the abstractions do not lose precision when middleboxes may reset in any state. This is encouraging since many real middleboxes reset, e.g., after some session timeout is reached or due to hardware failure.
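
    The following is a minimal sketch of the flavour of abstractions (1)-(3), not the paper's algorithm: channels are abstracted as sets of packets (no order, no multiplicity), each middlebox is summarised by a per-packet effect function, and reachable channel contents are computed by a Kleene fixed-point iteration. All names and the toy firewall are hypothetical.

    ```python
    from typing import Callable, Dict, FrozenSet, Set, Tuple

    Packet = str
    Node = str
    Channel = Tuple[Node, Node]  # (sender, receiver)
    # effect(node, packet, packets_seen) -> packets emitted on outgoing channels
    Effect = Callable[[Node, Packet, FrozenSet[Packet]], Dict[Channel, Set[Packet]]]


    def reachable_packets(topology: Dict[Node, Set[Node]],
                          initial: Dict[Channel, Set[Packet]],
                          effect: Effect) -> Dict[Channel, Set[Packet]]:
        """Kleene iteration over order/multiplicity-abstracted channel contents."""
        contents: Dict[Channel, Set[Packet]] = {
            (u, v): set() for u in topology for v in topology[u]
        }
        for ch, pkts in initial.items():
            contents.setdefault(ch, set()).update(pkts)

        changed = True
        while changed:
            changed = False
            for (src, dst), pkts in list(contents.items()):
                for pkt in list(pkts):
                    # Per-packet effect of the receiving middlebox `dst`; the set
                    # of packets it has seen stands in for its local state.
                    for out_ch, out_pkts in effect(dst, pkt, frozenset(pkts)).items():
                        new = out_pkts - contents.setdefault(out_ch, set())
                        if new:
                            contents[out_ch].update(new)
                            changed = True
        return contents


    if __name__ == "__main__":
        # Toy example: a "firewall" middlebox that drops packets tagged secret.
        topo = {"h1": {"fw"}, "fw": {"h2"}, "h2": set()}

        def fw_effect(node, pkt, seen):
            if node == "fw" and "secret" not in pkt:
                return {("fw", "h2"): {pkt}}
            return {}

        reach = reachable_packets(topo, {("h1", "fw"): {"public", "secret"}}, fw_effect)
        # Isolation property: no secret packet ever reaches h2.
        assert all("secret" not in p for p in reach.get(("fw", "h2"), set()))
        print(reach)
    ```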

    Multitask Efficiencies in the Decision Tree Model

    In Direct Sum problems [KRW], one tries to show that for a given computational model, the complexity of computing a collection of finite functions on independent inputs is approximately the sum of their individual complexities. In this paper, by contrast, we study the diversity of ways in which the joint computational complexity can behave when all the functions are evaluated on a common input. We focus on the deterministic decision tree model, with depth as the complexity measure; in this model we prove a result to the effect that the 'obvious' constraints on joint computational complexity are essentially the only ones. The proof uses an intriguing new type of cryptographic data structure called a 'mystery bin', which we construct using a small polynomial separation between deterministic and unambiguous query complexity shown by Savicky. We also pose a variant of the Direct Sum Conjecture of [KRW] which, if proved for a single family of functions, could yield an analogous result for models such as the communication model. Comment: Improved exposition based on conference version.
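
    To make the complexity measure concrete, here is a small, purely illustrative brute-force computation of deterministic decision tree depth D(f) for a Boolean function given as a callable; it is not taken from the paper.

    ```python
    from functools import lru_cache
    from itertools import product


    def decision_tree_depth(f, n):
        """Deterministic decision tree depth D(f) for f: {0,1}^n -> {0,1}."""

        @lru_cache(maxsize=None)
        def depth(restriction):
            # restriction[i] is 0/1 if x_i has been queried, None otherwise.
            free = [i for i, v in enumerate(restriction) if v is None]
            values = {
                f(tuple(a[free.index(i)] if restriction[i] is None else restriction[i]
                        for i in range(n)))
                for a in product((0, 1), repeat=len(free))
            }
            if len(values) <= 1:        # constant under this restriction
                return 0
            best = len(free)            # querying every free variable suffices
            for i in free:              # choose the best variable to query next
                worst = max(
                    depth(tuple(b if j == i else restriction[j] for j in range(n)))
                    for b in (0, 1)
                )
                best = min(best, 1 + worst)
            return best

        return depth(tuple([None] * n))


    if __name__ == "__main__":
        xor3 = lambda x: x[0] ^ x[1] ^ x[2]
        or3 = lambda x: x[0] | x[1] | x[2]
        assert decision_tree_depth(xor3, 3) == 3   # parity requires all n queries
        assert decision_tree_depth(or3, 3) == 3    # so does OR in the worst case
        print("ok")
    ```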

    Ergodic mean-payoff games for the analysis of attacks in crypto-currencies

    Crypto-currencies are digital assets designed to work as a medium of exchange, e.g., Bitcoin, but they are susceptible to attacks (dishonest behavior of participants). A framework for the analysis of attacks in crypto-currencies requires (a) modeling of game-theoretic aspects to analyze incentives for deviation from honest behavior; (b) concurrent interactions between participants; and (c) analysis of long-term monetary gains. Traditional game-theoretic approaches for the analysis of security protocols consider either qualitative temporal properties such as safety and termination, or the very special class of one-shot (stateless) games. However, to analyze general attacks on protocols for crypto-currencies, both stateful analysis and quantitative objectives are necessary. In this work our main contributions are as follows: (a) we show how a class of concurrent mean-payoff games, namely ergodic games, can model various attacks that arise naturally in crypto-currencies; (b) we present the first practical implementation of algorithms for ergodic games that scales to model realistic problems for crypto-currencies; and (c) we present experimental results showing that our framework can handle games with thousands of states and millions of transitions.
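
    As a toy illustration of the long-run objective involved (not the paper's algorithm, and with made-up numbers): once both players fix memoryless randomised strategies in an ergodic game, the play induces a Markov chain, and the long-run average reward is the stationary distribution weighted by the expected one-step reward.

    ```python
    import numpy as np

    # Induced 2-state Markov chain (rows sum to 1) and expected one-step
    # rewards; both are made-up numbers for illustration only.
    P = np.array([[0.9, 0.1],
                  [0.4, 0.6]])
    r = np.array([1.0, -3.0])

    # Stationary distribution: solve pi P = pi with sum(pi) = 1.
    A = np.vstack([P.T - np.eye(2), np.ones(2)])
    b = np.array([0.0, 0.0, 1.0])
    pi, *_ = np.linalg.lstsq(A, b, rcond=None)

    long_run_average = float(pi @ r)
    print(pi, long_run_average)   # pi = [0.8, 0.2], average = 0.2
    ```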

    Computer aided synthesis: a game theoretic approach

    In this invited contribution, we propose a comprehensive introduction to game theory applied in computer aided synthesis. In this context, we give some classical results on two-player zero-sum games and then on multi-player non-zero-sum games. The simple case of one-player games is strongly related to automata theory on infinite words. All along the article, we focus on general approaches to solve the studied problems, and we provide several illustrative examples as well as intuitions on the proofs. Comment: Invited contribution for the conference "Developments in Language Theory" (DLT 2017).

    LNCS

    A controller is a device that interacts with a plant. At each time point, it reads the plant’s state and issues commands with the goal that the plant operates optimally. Constructing optimal controllers is a fundamental and challenging problem. Machine learning techniques have recently been successfully applied to train controllers, yet they have limitations. Learned controllers are monolithic and hard to reason about. In particular, it is difficult to add features without retraining, to guarantee any level of performance, and to achieve acceptable performance when encountering untrained scenarios. These limitations can be addressed by deploying quantitative run-time shields that serve as a proxy for the controller. At each time point, the shield reads the command issued by the controller and may choose to alter it before passing it on to the plant. We show how optimal shields that interfere as little as possible while guaranteeing a desired level of controller performance can be generated systematically and automatically using reactive synthesis. First, we abstract the plant by building a stochastic model. Second, we consider the learned controller to be a black box. Third, we measure controller performance and shield interference by two quantitative run-time measures that are formally defined using weighted automata. Then, the problem of constructing a shield that guarantees maximal performance with minimal interference is the problem of finding an optimal strategy in a stochastic 2-player game “controller versus shield” played on the abstract state space of the plant with a quantitative objective obtained from combining the performance and interference measures. We illustrate the effectiveness of our approach by automatically constructing lightweight shields for learned traffic-light controllers in various road networks. The shields we generate avoid liveness bugs, improve controller performance in untrained and changing traffic situations, and add features to learned controllers, such as giving priority to emergency vehicles.
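
    As a minimal sketch of the run-time architecture described above (not the synthesis procedure itself; the plant, controller, and correction function are hypothetical stand-ins), a shield simply sits between the black-box controller and the plant and may overwrite each command:

    ```python
    class Shield:
        def __init__(self, correct):
            # `correct(state, proposed_command)` returns the command that is
            # actually issued; a synthesised shield would implement it as the
            # optimal strategy of the "controller versus shield" game.
            self._correct = correct

        def step(self, state, proposed):
            return self._correct(state, proposed)


    def run(plant, controller, shield, horizon):
        """Closed loop: plant state -> controller -> shield -> plant."""
        state = plant.reset()
        for _ in range(horizon):
            proposed = controller(state)            # black-box learned controller
            command = shield.step(state, proposed)  # possibly altered command
            state = plant.step(command)
        return state
    ```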

    Extending Finite Memory Determinacy to Multiplayer Games

    We show that under some general conditions the finite memory determinacy of a class of two-player win/lose games played on finite graphs implies the existence of a Nash equilibrium built from finite memory strategies for the corresponding class of multi-player multi-outcome games. This generalizes a previous result by Brihaye, De Pril and Schewe. For most of our conditions we provide counterexamples showing that they cannot be dispensed with. Our proofs are generally constructive, that is, they provide upper bounds for the memory required, as well as algorithms to compute the relevant winning strategies. Comment: In Proceedings SR 2016, arXiv:1607.0269

    Bounding Average-Energy Games

    We consider average-energy games, where the goal is to minimize the long-run average of the accumulated energy. While several results have been obtained on these games recently, decidability of average-energy games with a lower-bound constraint on the energy level (but no upper bound) remained open; in particular, so far there was no known upper bound on the memory that is required for winning strategies. By reducing average-energy games with lower-bounded energy to infinite-state mean-payoff games and analyzing the density of low-energy configurations, we show an almost tight doubly-exponential upper bound on the necessary memory, and prove that the winner of average-energy games with lower-bounded energy can be determined in doubly-exponential time. We also prove EXPSPACE-hardness of this problem. Finally, we consider multi-dimensional extensions of all types of average-energy games: without bounds, with only a lower bound, and with both a lower and an upper bound on the energy. We show that the fully-bounded version is the only case to remain decidable in multiple dimensions.
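
    For reference, the objective referred to above is typically defined as follows, with w(e_j) the weight of the j-th edge of a play (sign and limsup/liminf conventions vary between papers):

    ```latex
    % Accumulated energy after k steps, and the long-run average of the
    % accumulated energy; the lower-bound variant additionally requires
    % EL(k) >= 0 for all k.
    \mathrm{EL}(k) \;=\; \sum_{j=1}^{k} w(e_j),
    \qquad
    \mathrm{AE} \;=\; \limsup_{n \to \infty} \frac{1}{n} \sum_{k=1}^{n} \mathrm{EL}(k)
    ```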

    Visibly pushdown modular games

    Recursive game graphs can be used to reason about the control flow of sequential programs with recursion. Here, the most natural notion of strategy is the modular one, i.e., a strategy that is local to a module and is oblivious to previous module invocations. In this work, we study for the first time modular strategies with respect to winning conditions expressed as languages of pushdown automata. We show that pushdown modular games are undecidable in general, and become decidable for visibly pushdown automata specifications. Our solution relies on a reduction to modular games with finite-state winning conditions. We carefully characterize the computational complexity of this decision problem by also considering as winning conditions nondeterministic and universal BĂŒchi or co-BĂŒchi visibly pushdown automata, and CARET or NWTL temporal logic formulas. As a further contribution, we present a different synthesis algorithm that improves on known solutions for large specifications and many exits.